
    First Person Perspective of Seated Participants Over a Walking Virtual Body Leads to Illusory Agency Over the Walking

    Agency, the attribution of authorship to an action of our body, requires the intention to carry out the action and, subsequently, a match between its predicted and actual sensory consequences. However, illusory agency can be generated through priming of the action together with perception of bodily action, even when there has been no actual corresponding action. Here we show that participants can have the illusion of agency over the walking of a virtual body even though in reality they are seated and only allowed head movements. The experiment (n = 28) had two factors: Perspective (1PP or 3PP) and Head Sway (Sway or NoSway). Participants saw a life-sized virtual body either spatially coincident with their own from a first person perspective (1PP) or from a third person perspective (3PP). In the Sway condition the viewpoint included a walking animation, but not in NoSway. The results show strong illusions of body ownership, agency and walking in the 1PP compared to the 3PP condition, and an enhanced level of arousal while the walking was up a virtual hill. Sway reduced the level of agency. We conclude with a discussion of the results in the light of current theories of agency.

    Over my fake body: body ownership illusions for studying the multisensory basis of own-body perception

    Which is my body and how do I distinguish it from the bodies of others, or from objects in the surrounding environment? The perception of our own body and more particularly our sense of body ownership is taken for granted. Nevertheless, experimental findings from body ownership illusions (BOIs) show that under specific multisensory conditions, we can experience artificial body parts or fake bodies as our own body parts or body, respectively. The aim of the present paper is to discuss how and why BOIs are induced. We review several experimental findings concerning the spatial, temporal, and semantic principles of crossmodal stimuli that have been applied to induce BOIs. On the basis of these principles, we discuss theoretical approaches concerning the underlying mechanism of BOIs. We propose a conceptualization based on Bayesian causal inference for addressing how our nervous system could infer whether an object belongs to our own body, using multisensory, sensorimotor, and semantic information, and we discuss how this can account for several experimental findings. Finally, we point to neural network models as an implementational framework within which the computational problem behind BOIs could be addressed in the future.
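
    The Bayesian causal-inference account reviewed above can be made concrete in a few lines. The following is a minimal illustrative sketch, not the authors' model: it computes the posterior probability that a visual and a proprioceptive position cue share one common cause (the "this is my body" hypothesis) versus two independent causes, using the standard Gaussian closed forms. The function name and all variances, as well as the prior p_common, are hypothetical.

```python
import math

def common_cause_posterior(x_vis, x_prop, sigma_vis=1.0, sigma_prop=1.0,
                           sigma_prior=10.0, p_common=0.5):
    """Posterior probability that a visual and a proprioceptive position
    cue come from one common cause (the 'own body' hypothesis).
    Standard Gaussian causal-inference algebra with a zero-mean spatial
    prior of std sigma_prior; all parameter values are illustrative."""
    # Likelihood of both cues under one common cause (the latent position
    # is integrated out in closed form).
    var_sum = (sigma_vis**2 * sigma_prop**2
               + sigma_vis**2 * sigma_prior**2
               + sigma_prop**2 * sigma_prior**2)
    like_common = math.exp(-0.5 * ((x_vis - x_prop)**2 * sigma_prior**2
                                   + x_vis**2 * sigma_prop**2
                                   + x_prop**2 * sigma_vis**2) / var_sum)
    like_common /= 2.0 * math.pi * math.sqrt(var_sum)
    # Likelihood under two independent causes: each cue explained separately.
    var_v = sigma_vis**2 + sigma_prior**2
    var_p = sigma_prop**2 + sigma_prior**2
    like_indep = (math.exp(-0.5 * x_vis**2 / var_v) / math.sqrt(2.0 * math.pi * var_v)
                  * math.exp(-0.5 * x_prop**2 / var_p) / math.sqrt(2.0 * math.pi * var_p))
    return (like_common * p_common
            / (like_common * p_common + like_indep * (1.0 - p_common)))
```

    In this sketch the common-cause posterior, read as the strength of ownership, falls as the spatial discrepancy between the seen and felt positions grows, matching the spatial principle of BOI induction discussed in the paper.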

    A deep active inference model of the rubber-hand illusion

    Understanding how perception and action deal with sensorimotor conflicts, such as the rubber-hand illusion (RHI), is essential to understand how the body adapts to uncertain situations. Recent results in humans have shown that the RHI not only produces a change in the perceived arm location, but also causes involuntary forces. Here, we describe a deep active inference agent in a virtual environment, which we subjected to the RHI, that is able to account for these results. We show that our model, which deals with high-dimensional visual inputs, produces perceptual and force patterns similar to those found in humans. (8 pages, 3 figures. Accepted at the 1st International Workshop on Active Inference, held in conjunction with the European Conference on Machine Learning 2020; the final authenticated publication is available online at https://doi.org/10.1007/978-3-030-64919-7_1)
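
    The paper's agent is a deep network, but the perceptual-inference step that active inference builds on can be sketched without one. In this minimal, assumed illustration (function name, learning rate, and precision values are not from the paper), a belief mu about arm position is updated by gradient descent on precision-weighted prediction errors from vision and proprioception; when the two cues conflict, as in the RHI, the belief drifts toward the seen hand.

```python
def infer_arm_position(visual_obs, proprio_obs, prec_vis=1.0, prec_prop=1.0,
                       lr=0.1, steps=200):
    """Update a belief mu about arm position by gradient descent on the
    free energy, i.e. on precision-weighted prediction errors from the
    visual and proprioceptive channels. Parameter values are illustrative."""
    mu = proprio_obs  # start from the felt (proprioceptive) position
    for _ in range(steps):
        grad = prec_vis * (visual_obs - mu) + prec_prop * (proprio_obs - mu)
        mu += lr * grad
    return mu
```

    With equal precisions the belief settles at the mean of the two cues, halfway between the felt and the seen hand; raising prec_vis drags it further toward the seen hand, which is the qualitative pattern of proprioceptive drift in the RHI.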

    Body ownership increases the interference between observed and executed movements

    When we successfully achieve willed actions, the feeling that our moving body parts belong to the self (i.e., body ownership) is barely required. However, how and to what extent the awareness of our own body contributes to the neurocognitive processes subserving actions is still debated. Here we capitalized on immersive virtual reality in order to examine whether and how body ownership influences motor performance (and, secondly, whether it modulates the feeling of voluntariness). Healthy participants saw a virtual body either from a first or a third person perspective. In both conditions, they had to continuously draw straight vertical lines while seeing the virtual arm doing the same action (i.e., drawing lines) or deviating from it (i.e., drawing ellipses). Results showed that when there was a mismatch between the intended and the seen movements (i.e., participants had to draw lines but the avatar drew ellipses), motor performance was strongly “attracted” towards the seen (rather than the performed) movement when the avatar’s body part was perceived as own (i.e., first person perspective). In support of previous studies, here we provide direct behavioral evidence that the feeling of body ownership modulates the interference of seen movements with performed movements.

    User experience evaluation of human representation in collaborative virtual environments

    Human embodiment/representation in virtual environments (VEs), similarly to the human body in real life, is endowed with multimodal input/output capabilities that convey multiform messages enabling communication, interaction and collaboration in VEs. This paper assesses how effectively different types of virtual human (VH) artefacts enable smooth communication and interaction in VEs. Focusing on the REal and Virtual Engagement In Realistic Immersive Environments (REVERIE) multi-modal immersive system prototype, a research project funded by the European Commission Seventh Framework Programme (FP7/2007-2013), the paper evaluates the effectiveness of REVERIE VH representation on the foregoing issues, based on two specifically designed use cases and through the lens of a set of design guidelines generated by previous extensive empirical user-centred research. The impact of REVERIE VH representations on the quality of user experience (UX) is evaluated through field trials. The output of the current study proposes directions for improving human representation in collaborative virtual environments (CVEs), extrapolated from lessons learned in the evaluation of REVERIE VH representation.

    The body fades away: investigating the effects of transparency of an embodied virtual body on pain threshold and body ownership

    The feeling of “ownership” over an external dummy/virtual body (or body part) has been proven to have both physiological and behavioural consequences. For instance, the vision of an “embodied” dummy or virtual body can modulate pain perception. However, the impact of partial or total invisibility of the body on physiology and behaviour has been hardly explored since it presents obvious difficulties in the real world. In this study we explored how body transparency affects both body ownership and pain threshold. By means of virtual reality, we presented healthy participants with a virtual co-located body with four different levels of transparency, while participants were tested for pain threshold by increasing ramps of heat stimulation. We found that the strength of the body ownership illusion decreases when the body gets more transparent. Nevertheless, in the conditions where the body was semi-transparent, higher levels of ownership over a see-through body resulted in an increased pain sensitivity. Virtual body ownership can be used for the development of pain management interventions. However, we demonstrate that providing invisibility of the body does not increase pain threshold. Therefore, body transparency is not a good strategy to decrease pain in clinical contexts, yet this remains to be tested.

    Owning an overweight or underweight body: distinguishing the physical, experienced and virtual body

    Our bodies are the most intimately familiar objects we encounter in our perceptual environment. Virtual reality provides a unique method to allow us to experience having a very different body from our own, thereby providing a valuable method to explore the plasticity of body representation. In this paper, we show that women can experience ownership over a whole virtual body that is considerably smaller or larger than their physical body. In order to gain a better understanding of the mechanisms underlying body ownership, we use an embodiment questionnaire, and introduce two new behavioral response measures: an affordance estimation task (indirect measure of body size) and a body size estimation task (direct measure of body size). Interestingly, after viewing the virtual body from first person perspective, both the affordance and the body size estimation tasks indicate a change in the perception of the size of the participant’s experienced body. The change is biased by the size of the virtual body (overweight or underweight). Another novel aspect of our study is that we distinguish between the physical, experienced and virtual bodies, by asking participants to provide affordance and body size estimations for each of the three bodies separately. This methodological point is important for virtual reality experiments investigating body ownership of a virtual body, because it offers a better understanding of which cues (e.g. visual, proprioceptive, memory, or a combination thereof) influence body perception, and whether the impact of these cues can vary between different setups.

    Beyond the Libet clock: modality variants for agency measurements

    The Sense of Agency (SoA) refers to our capability to control our own actions and influence the world around us. Recent research in HCI has been exploring SoA to provide users with an instinctive sense of “I did that” as opposed to “the system did that”. However, current agency measurements are limited: the Intentional Binding (IB) paradigm provides an implicit measure of the SoA, but it is constrained by requiring high visual attention to a “Libet clock” onscreen. In this paper, we extend the timing stimulus through auditory and tactile cues. Our results demonstrate that audio timing through voice commands and haptic timing through tactile cues on the hand are alternative techniques to measure the SoA using the IB paradigm. Both address limitations of the traditional method (e.g., lack of engagement and visual demand). We discuss how our results can be applied to measure SoA in tasks involving different interactive scenarios common in HCI.
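
    Whatever the timing modality (clock, auditory or tactile cue), the IB paradigm reduces to comparing judged and actual event times. A minimal sketch of the standard quantities follows; the function name and the example numbers are illustrative, not taken from the paper.

```python
def intentional_binding(judged_action_ms, actual_action_ms,
                        judged_outcome_ms, actual_outcome_ms):
    """Judgment errors in the intentional-binding paradigm. Under voluntary
    action the action is typically judged later (positive action shift) and
    the outcome earlier (negative outcome shift), so the perceived
    action-outcome interval is compressed relative to the actual one."""
    action_shift = judged_action_ms - actual_action_ms
    outcome_shift = judged_outcome_ms - actual_outcome_ms
    perceived_interval = judged_outcome_ms - judged_action_ms
    actual_interval = actual_outcome_ms - actual_action_ms
    compression = actual_interval - perceived_interval  # positive = binding
    return action_shift, outcome_shift, compression
```

    For instance, a keypress at 0 ms judged at +15 ms and a tone at 250 ms judged at 220 ms give shifts of +15 ms and -30 ms, and 45 ms of interval compression, the signature of intentional binding.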